Results 1 - 8 of 8
1.
PLoS Comput Biol ; 19(10): e1011584, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37903158

ABSTRACT

Applications of generative models for genomic data have gained significant momentum in the past few years, with scopes ranging from data characterization to generation of genomic segments and functional sequences. In our previous study, we demonstrated that generative adversarial networks (GANs) and restricted Boltzmann machines (RBMs) can be used to create novel high-quality artificial genomes (AGs) which can preserve the complex characteristics of real genomes such as population structure, linkage disequilibrium and selection signals. However, a major drawback of these models is scalability, since the large feature space of genome-wide data vastly increases computational complexity. To address this issue, we implemented a novel convolutional Wasserstein GAN (WGAN) model along with a novel conditional RBM (CRBM) framework for generating AGs with a high SNP count. These networks implicitly learn the varying landscape of haplotypic structure in order to capture complex correlation patterns along the genome and generate a wide diversity of plausible haplotypes. We performed comparative analyses to assess both the quality of these generated haplotypes and the amount of possible privacy leakage from the training data. As genetic privacy becomes an increasingly prominent concern, the need for effective privacy protection measures for genomic data grows. We used generative neural networks to create large artificial genome segments which possess many characteristics of real genomes without substantial privacy leakage from the training dataset. In the near future, with further improvements in haplotype quality and privacy preservation, large-scale artificial genome databases can be assembled to provide easily accessible surrogates of real databases, allowing researchers to conduct studies with diverse genomic data within a framework that is ethically safe in terms of donor privacy.
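
The published WGAN and CRBM architectures are not reproduced here, but the general technique is easy to illustrate. Below is a minimal, hypothetical PyTorch sketch of a convolutional Wasserstein GAN over binary haplotype matrices; the SNP count, latent dimension and layer sizes are illustrative assumptions, and the Lipschitz constraint on the critic (weight clipping or a gradient penalty) needed for real WGAN training is omitted.

    # Minimal convolutional WGAN sketch for binary SNP haplotypes (PyTorch).
    # Shapes and layer widths are illustrative assumptions, not the published
    # architecture; the Lipschitz constraint on the critic is omitted.
    import torch
    import torch.nn as nn

    N_SNP = 10000   # assumed number of SNPs per haplotype
    Z_DIM = 128     # assumed latent dimension

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z_DIM, 256 * (N_SNP // 16)),
                nn.Unflatten(1, (256, N_SNP // 16)),
                nn.ConvTranspose1d(256, 128, 4, stride=4), nn.ReLU(),
                nn.ConvTranspose1d(128, 64, 4, stride=4), nn.ReLU(),
                nn.Conv1d(64, 1, 3, padding=1), nn.Sigmoid(),  # P(allele = 1)
            )

        def forward(self, z):
            return self.net(z)              # (batch, 1, N_SNP)

    class Critic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 64, 5, stride=4, padding=2), nn.LeakyReLU(0.2),
                nn.Conv1d(64, 128, 5, stride=4, padding=2), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, 1),
            )

        def forward(self, x):
            return self.net(x)              # unbounded Wasserstein score

    def critic_loss(critic, real, fake):
        # Wasserstein objective: real haplotypes should score higher than fakes.
        return critic(fake).mean() - critic(real).mean()

    def generator_loss(critic, fake):
        return -critic(fake).mean()

The essential differences from a standard GAN are the unbounded critic output and the mean-difference loss; everything else is an ordinary 1D convolutional pipeline over the SNP axis.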


Subjects
Genomics; Learning; Databases, Factual; Haplotypes; Neural Networks, Computer
2.
Bioinformatics ; 39(1)2023 01 01.
Article in English | MEDLINE | ID: mdl-36445000

ABSTRACT

MOTIVATION: We present dnadna, a flexible Python-based software package for deep learning inference in population genetics. It is task-agnostic and aims to facilitate the development, reproducibility, dissemination and re-usability of neural networks designed for population genetic data.

RESULTS: dnadna defines multiple user-friendly workflows. First, users can implement new architectures and tasks while benefiting from dnadna utility functions, its training procedure and its test environment, which saves time and decreases the likelihood of bugs. Second, the implemented networks can be re-optimized based on user-specified training sets and/or tasks. Newly implemented architectures and pre-trained networks are easily shareable with the community for further benchmarking or other applications. Finally, users can apply pre-trained networks to predict evolutionary history from alternative real or simulated genetic datasets, without requiring extensive knowledge of deep learning or coding in general. dnadna comes with a peer-reviewed, exchangeable neural network allowing demographic inference from SNP data, which can be used directly or retrained to solve other tasks. Toy networks are also available to ease exploration of the software, and we expect the range of available architectures to keep expanding thanks to community contributions.

AVAILABILITY AND IMPLEMENTATION: dnadna is a Python (≥3.7) package; its repository is available at gitlab.com/mlgenetics/dnadna and its associated documentation at mlgenetics.gitlab.io/dnadna/.


Subjects
Deep Learning; Reproducibility of Results; Neural Networks, Computer; Software; Genetics, Population
3.
Mol Ecol Resour ; 21(8): 2645-2660, 2021 Nov.
Article in English | MEDLINE | ID: mdl-32644216

ABSTRACT

Over the past decades, simulation-based likelihood-free inference methods have enabled researchers to address numerous population genetics problems. As the richness and amount of simulated and real genetic data keep increasing, the field has a strong opportunity to tackle tasks that current methods hardly solve. However, high data dimensionality forces most methods to summarize large genomic data sets into a relatively small number of handcrafted features (summary statistics). Here, we propose an alternative to summary statistics, based on the automatic extraction of relevant information using deep learning techniques. Specifically, we design artificial neural networks (ANNs) that take as input single nucleotide polymorphic sites (SNPs) found in individuals sampled from a single population and infer the past effective population size history. First, we provide guidelines to construct artificial neural networks that comply with the intrinsic properties of SNP data, such as invariance to permutation of haplotypes, long-scale interactions between SNPs, and variable genomic length. Thanks to a Bayesian hyperparameter optimization procedure, we evaluate the performance of multiple networks and compare them to well-established methods like Approximate Bayesian Computation (ABC). Even without the expert knowledge of summary statistics, our approach compares fairly well to an ABC approach based on handcrafted features. Furthermore, we show that combining deep learning and ABC can improve performance while taking advantage of both frameworks. Finally, we apply our approach to reconstruct the effective population size history of cattle breed populations.
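
The design guidelines mentioned above (invariance to permutation of haplotypes, variable genomic length) can be satisfied with an exchangeable architecture: shared per-haplotype convolutions followed by a symmetric pooling across haplotypes. The sketch below is a hypothetical PyTorch illustration of that idea, not one of the networks evaluated in the study; the input shape and the number of output time windows are assumptions.

    # Hypothetical permutation-invariant network for SNP matrices (PyTorch).
    # Input: (batch, n_haplotypes, n_snps) binary matrix; output: effective
    # population size in a fixed number of time windows (an assumption here).
    import torch
    import torch.nn as nn

    class ExchangeableNet(nn.Module):
        def __init__(self, n_windows=21):
            super().__init__()
            # Applied independently to every haplotype: weights are shared, so
            # permuting haplotypes only permutes the intermediate features.
            self.per_haplotype = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=7, padding=12, dilation=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16),   # handles variable numbers of SNPs
                nn.Flatten(),               # (batch * n_hap, 64 * 16)
            )
            self.head = nn.Sequential(
                nn.Linear(64 * 16, 256), nn.ReLU(),
                nn.Linear(256, n_windows),
            )

        def forward(self, x):
            b, h, s = x.shape
            feats = self.per_haplotype(x.reshape(b * h, 1, s))
            feats = feats.reshape(b, h, -1).mean(dim=1)   # symmetric pooling
            return self.head(feats)                       # permutation invariant

    # Usage on a toy batch: 8 samples, 50 haplotypes, 3000 SNPs.
    net = ExchangeableNet()
    y = net(torch.randint(0, 2, (8, 50, 3000)).float())
    print(y.shape)   # torch.Size([8, 21])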


Subjects
Deep Learning; Models, Genetic; Animals; Bayes Theorem; Cattle; Computer Simulation; Genetics, Population; Population Density
4.
Front Big Data ; 3: 1, 2020.
Article in English | MEDLINE | ID: mdl-33693376

ABSTRACT

The forecast of tropical cyclone trajectories is crucial for the protection of people and property. Although dynamical forecast models can provide high-precision short-term forecasts, they are computationally demanding, and current statistical forecasting models leave much room for improvement given that the database of past hurricanes is constantly growing. Machine learning methods, which can capture non-linearities and complex relations, have so far been only scarcely tested for this application. We propose a neural network model fusing past trajectory data and reanalysis atmospheric images (wind and pressure 3D fields). We use a moving frame of reference that follows the storm center for the 24 h tracking forecast. The network is trained to estimate the longitude and latitude displacement of tropical cyclones and depressions from a large database from both hemispheres (more than 3,000 storms since 1979, sampled at a 6 h frequency). The advantage of the fused network is demonstrated, and a comparison with current forecast models shows that deep learning methods could provide a valuable and complementary prediction. Moreover, our method can give a forecast for a new storm in a few seconds, which is an important asset for real-time forecasting compared with traditional approaches.
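
A fusion model of this kind can be sketched as two branches whose features are concatenated before a small regression head. The PyTorch code below is a hypothetical illustration, not the published architecture; the grid size, channel count and track length are assumptions.

    # Hypothetical fusion network for 24 h cyclone displacement (PyTorch sketch).
    # Assumed inputs: a stack of wind/pressure fields centred on the storm
    # (batch, n_channels, 25, 25) and the recent track as past displacements
    # (batch, n_steps * 2). Sizes are illustrative, not the published model.
    import torch
    import torch.nn as nn

    class FusionTracker(nn.Module):
        def __init__(self, n_channels=9, n_steps=4):
            super().__init__()
            self.image_branch = nn.Sequential(
                nn.Conv2d(n_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),                      # (batch, 64 * 6 * 6)
                nn.Linear(64 * 6 * 6, 128), nn.ReLU(),
            )
            self.track_branch = nn.Sequential(
                nn.Linear(n_steps * 2, 64), nn.ReLU(),
            )
            self.head = nn.Linear(128 + 64, 2)     # (d_latitude, d_longitude)

        def forward(self, fields, track):
            fused = torch.cat([self.image_branch(fields),
                               self.track_branch(track)], dim=1)
            return self.head(fused)

    model = FusionTracker()
    pred = model(torch.randn(16, 9, 25, 25), torch.randn(16, 8))
    print(pred.shape)   # torch.Size([16, 2])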

5.
Sensors (Basel) ; 17(7)2017 Jun 29.
Article in English | MEDLINE | ID: mdl-28661440

ABSTRACT

Visual activity recognition plays a fundamental role in several research fields as a way to extract semantic meaning from images and videos. Prior work has mostly focused on classification tasks, where a label is given for a video clip. However, real-life scenarios require a method to browse a continuous video stream, automatically identify relevant temporal segments and classify them according to target activities. This paper proposes a knowledge-driven event recognition framework to address this problem. The novelty of the method lies in the combination of a constraint-based ontology language for event modeling with robust algorithms to detect, track and re-identify people using color-depth sensing (Kinect® sensor). This combination makes it possible to model and recognize longer, more complex events and to incorporate domain knowledge and 3D information into the same models. Moreover, the ontology-driven approach enables human understanding of system decisions and facilitates knowledge transfer across different scenes. The proposed framework is evaluated with real-world recordings of seniors carrying out unscripted, daily activities at hospital observation rooms and nursing homes. Results demonstrated that the proposed framework outperforms state-of-the-art methods in a variety of activities and datasets, and that it is robust to variable and low-frame-rate recordings. Further work will investigate how to extend the proposed framework with uncertainty management techniques to handle strong occlusion and ambiguous semantics, and how to exploit it to further support the timely diagnosis of cognitive disorders, such as Alzheimer's disease.
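
The constraint-based ontology language itself is not reproduced here; the toy Python sketch below only illustrates the flavor of declarative event models built from components and constraints over tracked detections. All names (zones, postures, event labels) are hypothetical.

    # Toy sketch of a declarative, constraint-based event model (hypothetical;
    # not the ontology language used in the paper). An event is declared by its
    # components and a set of constraints over tracked people and zones.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Detection:
        person_id: int
        zone: str          # e.g. "reading_zone", produced by the tracker
        posture: str       # e.g. "sitting", "standing"
        t_start: float
        t_end: float

    @dataclass
    class EventModel:
        name: str
        components: List[str]                                  # sub-events
        constraints: List[Callable[[Dict[str, Detection]], bool]] = field(default_factory=list)

        def recognize(self, bindings: Dict[str, Detection]) -> bool:
            return (set(self.components) <= set(bindings)
                    and all(c(bindings) for c in self.constraints))

    # "Person sits down in the reading zone": two components plus a temporal
    # ordering constraint, a spatial constraint and an identity constraint.
    sit_in_reading_zone = EventModel(
        name="sit_in_reading_zone",
        components=["enter_zone", "sit"],
        constraints=[
            lambda b: b["enter_zone"].t_end <= b["sit"].t_start,   # before
            lambda b: b["sit"].zone == "reading_zone",             # where
            lambda b: b["enter_zone"].person_id == b["sit"].person_id,
        ],
    )

    bindings = {
        "enter_zone": Detection(1, "reading_zone", "standing", 10.0, 12.0),
        "sit":        Detection(1, "reading_zone", "sitting", 12.5, 40.0),
    }
    print(sit_in_reading_zone.recognize(bindings))   # True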

6.
IEEE Trans Image Process ; 23(9): 3829-40, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25020092

ABSTRACT

We propose a new method for joint segmentation of monotonically growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
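
The growth constraint can be reproduced in a few lines with an off-the-shelf max-flow solver. The sketch below assumes the PyMaxflow package and uses simplistic placeholder unary and smoothness terms rather than the energy defined in the paper; only the monodirectional infinite temporal links correspond to the idea described above.

    # Sketch of joint segmentation with a shape-growth constraint via graph cut.
    # Assumes the PyMaxflow package; the unary and smoothness terms below are
    # simplistic placeholders, not the energy used in the paper.
    import numpy as np
    import maxflow

    INF = 1e9

    def segment_growing(prob_fg, smooth=1.0):
        """prob_fg: (T, H, W) per-pixel foreground probabilities in (0, 1)."""
        T, H, W = prob_fg.shape
        eps = 1e-6
        g = maxflow.Graph[float]()
        ids = g.add_grid_nodes((T, H, W))

        # Spatial smoothness only (4-neighbourhood inside each frame).
        structure = np.zeros((3, 3, 3))
        structure[1] = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
        g.add_grid_edges(ids, weights=smooth, structure=structure, symmetric=True)

        # Unary terms. Convention: source side = foreground, sink side =
        # background, so the source capacity carries the background cost.
        cost_fg = -np.log(prob_fg + eps)
        cost_bg = -np.log(1.0 - prob_fg + eps)
        g.add_grid_tedges(ids, cost_bg, cost_fg)

        # Monodirectional infinite links: forbid "foreground at t, background
        # at t+1", which enforces monotone growth of the shape over time.
        for t in range(T - 1):
            for i, j in zip(ids[t].ravel(), ids[t + 1].ravel()):
                g.add_edge(i, j, INF, 0)

        g.maxflow()
        # get_grid_segments is True on the sink side (background here),
        # so the foreground mask is its negation.
        return ~g.get_grid_segments(ids)

For a shrinkage constraint, the same infinite links are simply added in the opposite temporal direction.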

7.
J Nucl Med ; 49(11): 1875-83, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18927326

ABSTRACT

For quantitative PET information, correction of tissue photon attenuation is mandatory. In conventional PET, the attenuation map is generally obtained from a transmission scan, which uses a rotating radionuclide source, or from the CT scan in a combined PET/CT scanner. In the PET/MRI scanners currently under development there is insufficient space for a rotating source; the attenuation map can instead be calculated from the MR image. This task is challenging because MR intensities correlate with proton densities and tissue-relaxation properties rather than with attenuation-related mass density.

METHODS: We used a combination of local pattern recognition and atlas registration, which captures global variation of anatomy, to predict pseudo-CT images from a given MR image. These pseudo-CT images were then used for attenuation correction, as the process would be performed in a PET/CT scanner.

RESULTS: For human brain scans, we show on a database of 17 MR/CT image pairs that our method reliably enables estimation of a pseudo-CT image from the MR image alone. On additional datasets of MRI/PET/CT triplets of human brain scans, we compare MRI-based attenuation correction with CT-based correction. Our approach enables PET quantification with a mean error of 3.2% for predefined regions of interest, which we found to be not clinically significant. However, our method is not specific to brain imaging, and we show promising initial results on one whole-body animal dataset.

CONCLUSION: This method allows reliable MRI-based attenuation correction for human brain scans. Further work is necessary to validate the method for whole-body imaging.
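
As a rough illustration of the "local pattern recognition" ingredient, pseudo-CT prediction can be framed as patch-wise regression from MR intensities plus an atlas-based prior. The sketch below uses a random forest as a generic stand-in for the paper's regressor; all arrays and helper names are hypothetical.

    # Illustrative sketch of patch-wise pseudo-CT prediction: regress CT values
    # from MR patches plus an atlas-based prior at the same voxel. A random
    # forest stands in for the paper's regressor; arrays are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def extract_patches(volume, coords, r=2):
        """Flattened (2r+1)^3 neighbourhoods around voxel coordinates
        (assumed to lie away from the volume border)."""
        return np.stack([volume[x - r:x + r + 1,
                                y - r:y + r + 1,
                                z - r:z + r + 1].ravel()
                         for x, y, z in coords])

    def fit_pseudo_ct(mr_vols, ct_vols, atlas_ct_vols, coords_per_subject):
        """Train on co-registered MR/CT pairs; atlas_ct_vols holds the
        registered atlas CT at the same voxels (global anatomy prior)."""
        X, y = [], []
        for mr, ct, atlas, coords in zip(mr_vols, ct_vols, atlas_ct_vols,
                                         coords_per_subject):
            patches = extract_patches(mr, coords)
            prior = np.array([atlas[tuple(c)] for c in coords])[:, None]
            X.append(np.hstack([patches, prior]))
            y.append(np.array([ct[tuple(c)] for c in coords]))
        model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
        model.fit(np.vstack(X), np.concatenate(y))
        return model

    def predict_pseudo_ct(model, mr, atlas, coords):
        patches = extract_patches(mr, coords)
        prior = np.array([atlas[tuple(c)] for c in coords])[:, None]
        return model.predict(np.hstack([patches, prior]))   # Hounsfield units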


Subjects
Artifacts; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Positron-Emission Tomography/methods; Databases, Factual; Humans; Reproducibility of Results; Tomography, X-Ray Computed; Whole Body Imaging
8.
Neuroimage ; 23 Suppl 1: S46-55, 2004.
Article in English | MEDLINE | ID: mdl-15501100

ABSTRACT

We survey the recent activities of the Odyssée Laboratory in the area of the application of mathematics to the design of models for studying brain anatomy and function. We start with the problem of reconstructing sources in MEG and EEG, and discuss the variational approach we have developed for solving these inverse problems. This motivates the need for geometric models of the head. We present a method for automatically and accurately extracting surface meshes of several tissues of the head from anatomical magnetic resonance (MR) images. Anatomical connectivity can be extracted from diffusion tensor magnetic resonance images but, in the current state of the technology, it must be preceded by a robust estimation and regularization stage. We discuss our work based on variational principles and show how the results can be used to track fibers in the white matter (WM) as geodesics in some Riemannian space. We then turn to the statistical modeling of functional magnetic resonance imaging (fMRI) signals from the viewpoint of their decomposition into a pseudo-deterministic part and a stochastic part, which we then use to cluster voxels in a manner inspired by the theory of support vector machines and grounded in information theory. Multimodal image matching is discussed next in the framework of image statistics and partial differential equations (PDEs), with an eye to registering fMRI to the anatomy. The paper ends with a discussion of a new theory of random shapes that may prove useful in building anatomical and functional atlases.
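
For the MEG/EEG source problem mentioned above, the variational viewpoint is conventionally written as a regularized least-squares functional; the generic textbook form below is only an illustration, not the laboratory's specific functional:

    \hat{x} = \arg\min_{x} \; \lVert L x - m \rVert_2^{2} + \lambda\, R(x)

where L is the lead-field operator mapping source activity x to the measurements m, R is a regularizer encoding prior assumptions on the sources, and \lambda > 0 balances data fit against regularization.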


Subjects
Brain/anatomy & histology; Brain/physiology; Algorithms; Brain Mapping; Computer Simulation; Diffusion Magnetic Resonance Imaging; Humans; Magnetoencephalography; Models, Anatomic; Models, Statistical; Neural Pathways/anatomy & histology; Neural Pathways/cytology; Retina/anatomy & histology